
FinGPT_Forecaster error. #119

Open

teknightstick opened this issue Nov 10, 2023 · 1 comment
I am running Windows 10 in a conda environment.

I have an NVIDIA RTX 3090 Ti with 24 GB of VRAM. I have the latest drivers and the CUDA Toolkit installed.
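As a first sanity check (my suggestion, not from the original report), it can help to confirm that PyTorch actually sees the GPU before debugging the app itself:

```python
# Quick sanity check: does PyTorch see the GPU, and which CUDA build is it using?
# If cuda_ok is False, the model will silently fall back to CPU.
import torch

cuda_ok = torch.cuda.is_available()
print("CUDA available:", cuda_ok)
print("CUDA build:", torch.version.cuda)  # None on a CPU-only build of torch
if cuda_ok:
    print("Device:", torch.cuda.get_device_name(0))  # e.g. a GeForce RTX 3090 Ti
```

If this prints `CUDA available: False`, the conda environment has a CPU-only PyTorch build and no amount of app-level debugging will help.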

I set up the .env file.


It runs and loads the Gradio UI.


But when I click Submit, this occurs:


```
(fingpt) PS C:\Ai\FinGPT\fingpt\FinGPT_Forecaster> python app.py
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 6.66it/s]
C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\transformers\utils\hub.py:373: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers.
  warnings.warn(
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Traceback (most recent call last):
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\pynvml.py", line 641, in _LoadNvmlLibrary
    nvmlLib = CDLL(os.path.join(os.getenv("ProgramFiles", "C:/Program Files"), "NVIDIA Corporation/NVSMI/nvml.dll"))
  File "C:\Users\bulle\.conda\envs\fingpt\lib\ctypes\__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
FileNotFoundError: Could not find module 'C:\Program Files\NVIDIA Corporation\NVSMI\nvml.dll' (or one of its dependencies). Try using the full path with constructor syntax.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\gradio\queueing.py", line 427, in call_prediction
    output = await route_utils.call_process_api(
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\gradio\blocks.py", line 1486, in process_api
    result = await self.call_function(
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\gradio\blocks.py", line 1108, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\gradio\utils.py", line 665, in wrapper
    response = f(*args, **kwargs)
  File "app.py", line 253, in predict
    print_gpu_utilization()
  File "app.py", line 54, in print_gpu_utilization
    nvmlInit()
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\pynvml.py", line 608, in nvmlInit
    _LoadNvmlLibrary()
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\pynvml.py", line 646, in _LoadNvmlLibrary
    _nvmlCheckReturn(NVML_ERROR_LIBRARY_NOT_FOUND)
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\pynvml.py", line 310, in _nvmlCheckReturn
    raise NVMLError(ret)
pynvml.NVMLError_LibraryNotFound: NVML Shared Library Not Found

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\gradio\queueing.py", line 472, in process_events
    response = await self.call_prediction(awake_events, batch)
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\gradio\queueing.py", line 436, in call_prediction
    raise Exception(str(error) if show_error else None) from error
Exception: None
```

ChatGPT suggested that the nvml.dll file is the issue, and I found an article about it:

https://www.nvidia.com/en-us/geforce/forums/game-ready-drivers/13/295198/nvmldll-and-nvsmi-folder-missing/
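Besides restoring nvml.dll from the driver package, another option (my suggestion, not from the repo) is to make `print_gpu_utilization` in app.py tolerant of a missing NVML library, so that a monitoring failure doesn't abort the prediction. A sketch, assuming the pynvml calls the traceback shows:

```python
def print_gpu_utilization():
    """Report GPU memory use, but degrade gracefully if NVML is unavailable."""
    try:
        from pynvml import (nvmlInit, nvmlDeviceGetHandleByIndex,
                            nvmlDeviceGetMemoryInfo, nvmlShutdown)
        nvmlInit()  # raises NVMLError_LibraryNotFound when nvml.dll is missing
        info = nvmlDeviceGetMemoryInfo(nvmlDeviceGetHandleByIndex(0))
        print(f"GPU memory occupied: {info.used // 1024**2} MB.")
        nvmlShutdown()
    except Exception as exc:  # ImportError, NVMLError_LibraryNotFound, ...
        print(f"GPU monitoring unavailable: {exc}")
```

This only silences the monitoring side of the problem, of course; the DLL fix in the linked article addresses the root cause.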

I did that, and after I clicked Submit it looks like it got further than last time.

Now I get another, different error... progress!

```
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 6.50it/s]
C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\transformers\utils\hub.py:373: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers.
  warnings.warn(
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
GPU memory occupied: 6938 MB.
[100%%*] 1 of 1 completed
```
Inputs loaded onto devices.
[INST] <<SYS>>
You are a seasoned stock market analyst. Your task is to list the positive developments and potential concerns for companies based on relevant news and
basic financials from the past weeks, then provide an analysis and prediction for the companies' stock price movement for the upcoming week. Your answer format should be as follows:

[Positive Developments]:

  1. ...

[Potential Concerns]:

  1. ...

[Prediction & Analysis]
Prediction: ...
Analysis: ...
<</SYS>>

[Company Introduction]:

Apple Inc is a leading entity in the Technology sector. Incorporated and publicly traded since 1980-12-12, the company has established its reputation as one of the key players in the market. As of today, Apple Inc has a market capitalization of 2882298.26 in USD, with 15634.23 shares outstanding.

Apple Inc operates primarily in the US, trading under the ticker AAPL on the NASDAQ NMS - GLOBAL MARKET. As a dominant force in the Technology space, the company continues to innovate and drive progress within the industry.

From 2023-10-20 to 2023-10-27, AAPL's stock price decreased from 172.88 to 168.22. Company news during this period are listed below:

[Headline]: Why Apple could be the big winner in its rocky partnership with Goldman Sachs
[Summary]: The march toward a tech-driven financial sector seems inevitable.

[Headline]: Apple raises prices for Arcade gaming subscription service, AppleTV+ streaming
[Summary]: Apple Inc. is raising the prices for its AppleTV+ streaming and Arcade gaming plans as well as its bundled Apple One service that includes streaming, music and other subscriptions. Arcade will now cost $6.99, up from $4.99. AppleTV+ is now $9.99, up from $6.99.

[Headline]: 'Diworsification': The Mythical Boogeyman Of The Unconcentrated Portfolio
[Summary]: Diversification is important to protect against random negative events, but doesn't guarantee success. Find out more about a well-constructed, diversified portfolio.

[Headline]: Apple Inc. stock outperforms competitors despite losses on the day
[Summary]: Shares of Apple Inc. slipped 2.46% to $166.89 Thursday, on what proved to be an all-around grim trading session for the stock market, with the NASDAQ...

[Headline]: Q4 2023 Zedge Inc Earnings Call
[Summary]: Q4 2023 Zedge Inc Earnings Call

From 2023-10-27 to 2023-11-03, AAPL's stock price increased from 168.22 to 176.65. Company news during this period are listed below:

[Headline]: India opposition accuses govt of trying to hack lawmakers' iPhones
[Summary]: Indian opposition leader Rahul Gandhi on Tuesday accused Prime Minister Narendra Modi's government of trying to hack into senior opposition politicians' mobile phones, after they reported receiving warning messages from Apple. Some of the lawmakers shared screenshots on social media of a notification quoting the iPhone manufacturer as saying: "Apple believes you are being targeted by state-sponsored attackers who are trying to remotely compromise the iPhone associated with your Apple ID". "Hack us all you want," Gandhi told a news conference in New Delhi, in reference to Modi.

[Headline]: Stocks mixed into Fed meeting, new Apple MacBooks, Caterpillar and Pfizer earnings, AMD on Deck - Five Things To Know
[Summary]: Five things you need to know before the market opens on Tuesday October 31: 1. -- Stock Market Today: Stocks mixed, Treasury yields steady
ahead of Fed meeting Stocks are on pace for their worst month of the year this October, with higher Treasury yields, mounting geopolitical risks and muted big tech earnings holding down gains.

[Headline]: Apple shows off MacBooks with M3 chip at Scary Fast event
[Summary]: Apple (AAPL) announced its latest tech products including the latest generation of Macs and MacBook Pros fitted with its new line of M3 chips, during its Scary Fast virtual event. Apple notably expanded its marketing, during the event, to try and attract new audiences including gamers by showing off the chip's ray tracing capabilities. Yahoo Finance Technology Editor Dan Howley joins the Live show to break down his coverage of the event, what he thinks about Apple's latest announcements, and what they mean for investors. For more expert insight and the latest market action, click here to watch this full episode of Yahoo Finance Live.

[Headline]: Time to Buy Apple (AAPL) or Qualcomm's (QCOM) Stock as Earnings Approach?
[Summary]: Investors will certainly be hoping these iconic tech partners can post strong quarterly results that give both of their stocks a boost.

[Headline]: Apple earnings: Everything investors are watching
[Summary]: Apple (AAPL) is set to report earnings after the bell on Thursday, November 2. There are several key items that investors will be keeping an
eye on during this announcement. Yahoo Finance spoke to experts and analysts across the industry to break down the most important things to know. Kineo
Capital Managing Partner Jim Strugger broke down a potential option trade for investors to watch based on Apple earnings. Strugger explained, "The outcome we're structuring for is for the stock to move very little after earnings ... really taking advantage of pumped up implied volatility and selling that to the people that believe that Apple could move very sharply either up or down." Yahoo Finance Technology Editor Dan Howley discussed all of the details of Apple's new products and how they may impact earnings. Howley said, "They did announce some new chips and some new laptops, as well as a new iMac ... For the current quarter that's about to be reported though, Apple sales for laptops supposed to be lower, for desktops supposed to be lower. Overall, it looks as though this is going to help improve the Mac revenue. We just have to see how much of an impact it has." Creative Strategies CEO and Principal Analyst Ben Bajarin also discussed Apple's new product line and some of the challenges that Apple is facing ahead of earnings. Bajarin noted, "They are up against some supply constraints, particularly as they're moving to a new process technology for the high end iPhones ... looking at ASPs and gross margins will be very telling about if that new buyer base is skewing toward the higher end, which is what I think has been the trend over the last couple years." Video highlights: 00:00:03 - Kineo Capital Managing Partner Jim Strugger 00:00:40 - Yahoo Finance Technology Editor Dan Howley 00:01:14 -
Creative Strategies CEO and Principal Analyst Ben Bajarin

From 2023-11-03 to 2023-11-09, AAPL's stock price increased from 176.65 to 182.41. Company news during this period are listed below:

[Headline]: China’s President Xi to meet business executives in Silicon Valley: report
[Summary]: Chinese President Xi Jinping is scheduled to sit down with hundreds of business leaders over dinner next week when the Asia-Pacific Economic
Cooperation...

[Headline]: iMac 24-inch M3 review: Performance, specs, price
[Summary]: Apple's 2023 update to the 24-inch iMac gives a nice performance bump to the all-in-one Mac, but the M3 hardware is better suited to those moving from Intel than for users with the M1 version.

[Headline]: Nobody on Wall Street wants to bet against the ‘Magnificent Seven’
[Summary]: Almost nobody on Wall Street has the temerity to bet against the "Magnificent Seven" group of tech stocks.

[Headline]: 2 Under-the-Radar Tech Stocks To Buy in 2023
[Summary]: There are thousands of tech stocks for investors to choose from, but most tend to focus on just a handful of names. The good news is that stepping just outside of that small bubble of popular tech stocks can yield excellent diversification and some attractive values. If you like Apple for its tech innovation, then consider owning its less-celebrated rival Garmin (NYSE: GRMN).

[Headline]: Apple Stock Rebounds: How To Invest In Tech Sector Strength And See Money Flows
[Summary]: It’s time to put your money back to work in some ETFs. Asbury's John Kosar says which ones are worth the investment right now.

[Basic Financials]:

No basic financial reported.

Based on all the information before 2023-11-10, let's first analyze the positive developments and potential concerns for AAPL. Come up with 2-4 most important factors respectively and keep them concise. Most factors should be inferred from company related news. Then make your prediction of the AAPL stock price movement for next week (2023-11-10 to 2023-11-17). Provide a summary analysis to support your prediction.

```
Traceback (most recent call last):
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\gradio\queueing.py", line 427, in call_prediction
    output = await route_utils.call_process_api(
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\gradio\blocks.py", line 1486, in process_api
    result = await self.call_function(
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\gradio\blocks.py", line 1108, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\gradio\utils.py", line 665, in wrapper
    response = f(*args, **kwargs)
  File "app.py", line 264, in predict
    res = model.generate(
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\peft\peft_model.py", line 975, in generate
    outputs = self.base_model.generate(**kwargs)
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\transformers\generation\utils.py", line 1642, in generate
    return self.sample(
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\transformers\generation\utils.py", line 2724, in sample
    outputs = self(
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\accelerate\hooks.py", line 164, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\transformers\models\llama\modeling_llama.py", line 809, in forward
    outputs = self.model(
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\transformers\models\llama\modeling_llama.py", line 697, in forward
    layer_outputs = decoder_layer(
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\transformers\models\llama\modeling_llama.py", line 413, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\transformers\models\llama\modeling_llama.py", line 310, in forward
    query_states = self.q_proj(hidden_states)
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\peft\tuners\lora.py", line 902, in forward
    result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\gradio\queueing.py", line 472, in process_events
    response = await self.call_prediction(awake_events, batch)
  File "C:\Users\bulle\.conda\envs\fingpt\lib\site-packages\gradio\queueing.py", line 436, in call_prediction
    raise Exception(str(error) if show_error else None) from error
Exception: None
```

I tried to be as detailed as possible, as I get the same error on two different Windows PCs with NVIDIA cards.

Side note: Gradio is missing from requirements.txt.

@Noir97
Member

Noir97 commented Nov 11, 2023

Glad you're making progress. The Gradio app.py and requirements.txt are only tested for our HuggingFace Space, so they might not work well across all platforms. By the way, the Gradio SDK version we use can be found in the HuggingFace README.md.

For your latest issue, I'd guess your model is not properly loaded onto the GPU. Maybe try loading it manually onto the GPU instead of using device_map="auto".
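For context, `"addmm_impl_cpu_" not implemented for 'Half'` means a float16 matmul was dispatched to the CPU, which PyTorch does not support; in other words, at least part of the model ended up off the GPU. A minimal sketch of the fix being suggested (the helper name is mine, not from app.py): place the model wholly on CUDA when available, and fall back to float32 when only a CPU is present:

```python
import torch

def place_model(model):
    # fp16 ("Half") matmuls are only implemented on CUDA; running them on a CPU
    # raises: RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'.
    if torch.cuda.is_available():
        return model.to(device="cuda", dtype=torch.float16)
    return model.to(device="cpu", dtype=torch.float32)  # CPU fallback needs fp32
```

Applied to app.py, this would mean loading the base model with an explicit device/dtype (and keeping the input tensors on the same device) instead of relying on device_map="auto" to spread the weights.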
